A deep learning approach to assessing non-native pronunciation of English using phone distances
The way a non-native speaker pronounces the phones of a language is an important predictor of their proficiency. In grading spontaneous speech, the pairwise distances between generative statistical models trained on each phone have been shown to be powerful features. This paper presents a deep learning alternative to model-based phone distances in the form of a tunable Siamese network feature extractor that derives distance metrics directly from the audio frame sequence. Features are extracted at the phone instance level and combined into phone-level representations using an attention mechanism. Pairwise distances between phone features are then projected through a feed-forward layer to predict a score. The extraction stage is initialised either on a binary phone instance-pair classification task or to mimic the model-based features; the whole system is then fine-tuned end-to-end, optimising the learning of the distance metric for the score prediction task. This method is therefore more adaptable and more sensitive to phone instance-level phenomena. Its performance is compared against
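The paper's actual architecture is not given here, but the core idea — a shared-weight ("Siamese") encoder with attention pooling over frames, followed by a distance between pooled phone representations — can be sketched as follows. All dimensions, weight initialisations, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 13-dim acoustic frames, 8-dim phone embeddings.
FRAME_DIM, EMB_DIM = 13, 8
W = rng.standard_normal((FRAME_DIM, EMB_DIM)) * 0.1   # shared ("Siamese") projection
v = rng.standard_normal(EMB_DIM)                      # attention query vector

def embed_phone(frames):
    """Map a (T, FRAME_DIM) frame sequence for one phone instance to a
    single EMB_DIM vector by attention-weighted pooling.  Both branches
    of the Siamese network share W and v."""
    h = np.tanh(frames @ W)                 # (T, EMB_DIM) frame embeddings
    scores = h @ v                          # (T,) attention logits
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                    # softmax attention weights
    return alpha @ h                        # weighted sum over frames

def phone_distance(frames_a, frames_b):
    """Euclidean distance between the two pooled phone representations."""
    return float(np.linalg.norm(embed_phone(frames_a) - embed_phone(frames_b)))

# Two phone instances of different lengths map to the same embedding space.
a = rng.standard_normal((20, FRAME_DIM))
b = rng.standard_normal((35, FRAME_DIM))
d = phone_distance(a, b)
```

In the paper the whole pipeline is differentiable, so the distance metric itself can be fine-tuned end-to-end against the score prediction loss.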
A deep learning approach to automatic characterisation of rhythm in non-native English speech
A speaker's rhythm contributes to the intelligibility of their speech and can be characteristic of their language and accent. For non-native learners of a language, the extent to which they match its natural rhythm is an important predictor of their proficiency. As a learner improves, their rhythm is expected to become less similar to their L1 and more to the L2. Metrics based on the variability of the durations of vocalic and consonantal intervals have been shown to be effective at detecting language and accent. In this paper, pairwise variability (PVI, CCI) and variance (varcoV, varcoC) metrics are first used to predict proficiency and L1 of non-native speakers taking an English spoken exam. A deep learning alternative to generalise these features is then presented, in the form of a tunable duration embedding, based on attention over an RNN over durations. The RNN allows relationships beyond pairwise to be captured, while attention allows sensitivity to the different relative importance of durations. The system is trained end-to-end for proficiency and L1 prediction and compared to the baseline. The values of both sets of features for different proficiency levels are then visualised and compared to native speech in the L1 and the L2.
ALTA Institute
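The baseline rhythm metrics mentioned above have standard definitions: the normalised PVI averages the duration difference of successive intervals scaled by their local mean, and varcoV/varcoC are variation coefficients (standard deviation over mean). A minimal sketch, assuming interval durations in seconds:

```python
import numpy as np

def npvi(durations):
    """Normalised pairwise variability index over successive interval
    durations: mean of |d_k - d_{k+1}| scaled by the local mean
    duration (d_k + d_{k+1}) / 2, multiplied by 100."""
    d = np.asarray(durations, dtype=float)
    pair = np.abs(d[:-1] - d[1:]) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pair.mean()

def varco(durations):
    """Variation coefficient (varcoV over vocalic intervals,
    varcoC over consonantal ones): 100 * std / mean."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * d.std() / d.mean()

# A perfectly regular rhythm scores 0 on both metrics.
assert npvi([0.1, 0.1, 0.1, 0.1]) == 0.0
assert varco([0.1, 0.1, 0.1, 0.1]) == 0.0
```

The paper's RNN-with-attention duration embedding generalises these fixed formulas by learning which relationships between durations matter for the prediction task.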
Low-resource speech recognition and keyword-spotting
© Springer International Publishing AG 2017. The IARPA Babel program ran from March 2012 to November 2016. The aim of the program was to develop agile and robust speech technology that can be rapidly applied to any human language in order to provide effective search capability on large quantities of real world data. This paper will describe some of the developments in speech recognition and keyword-spotting during the lifetime of the project. Two technical areas will be briefly discussed with a focus on techniques developed at Cambridge University: the application of deep learning for low-resource speech recognition; and efficient approaches for keyword spotting. Finally, a brief analysis of the Babel speech language characteristics and language performance will be presented.
Unicode-based graphemic systems for limited resource languages
© 2015 IEEE. Large vocabulary continuous speech recognition systems require a mapping from words, or tokens, into sub-word units to enable robust estimation of acoustic model parameters, and to model words not seen in the training data. The standard approach to achieve this is to manually generate a lexicon where words are mapped into phones, often with attributes associated with each of these phones. Context-dependent acoustic models are then constructed using decision trees where questions are asked based on the phones and phone attributes. For low-resource languages, it may not be practical to manually generate a lexicon. An alternative approach is to use a graphemic lexicon, where the 'pronunciation' for a word is defined by the letters forming that word. This paper proposes a simple approach for building graphemic systems for any language written in Unicode. The attributes for graphemes are automatically derived using features from the Unicode character descriptions. These attributes are then used in decision tree construction. This approach is examined on the IARPA Babel Option Period 2 languages, and a Levantine Arabic CTS task. The described approach achieves comparable, and complementary, performance to phonetic lexicon-based approaches.
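Unicode character descriptions are directly available from Python's standard library, so the idea of deriving grapheme attributes automatically can be illustrated with a small sketch. The attribute set chosen here (script token, letter flag, name keywords) is an assumption for illustration, not the paper's exact feature inventory:

```python
import unicodedata

def grapheme_attributes(ch):
    """Derive decision-tree attributes for a grapheme from its Unicode
    character description, e.g. 'LATIN SMALL LETTER A'."""
    name = unicodedata.name(ch, "UNKNOWN")
    tokens = name.split()
    return {
        "name": name,
        "script": tokens[0] if tokens else "UNKNOWN",    # e.g. LATIN, ARABIC
        "is_letter": unicodedata.category(ch).startswith("L"),
        "keywords": set(tokens[1:]),                     # e.g. {'SMALL', 'LETTER', 'A'}
    }

attrs = grapheme_attributes("a")
```

Decision-tree questions for context-dependent models can then be asked over these automatically derived attributes instead of manually specified phone attributes.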
A language space representation for speech recognition
© 2015 IEEE. The number of languages for which speech recognition systems have become available is growing each year. This paper proposes to view languages as points in some rich space, termed language space, where the bases are eigen-languages and a particular choice of projection weights determines each language's point. Such an approach could not only reduce development costs for each new language but also provide automatic means for language analysis. For the initial proof of concept, this paper adopts cluster adaptive training (CAT), known for inducing similar spaces for speaker adaptation. The CAT approach used in this paper builds on previous work on language adaptation in speech synthesis and extends it to Gaussian mixture modelling, more appropriate for speech recognition. Experiments conducted on IARPA Babel program languages show that such language space representations can outperform language independent models and discover closely related languages in an automatic way.
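In CAT-style modelling, each Gaussian component mean for a particular language is an interpolation of cluster means — the "eigen-languages" — with language-specific weights, so each language is a point in the weight space. A minimal sketch of that interpolation, with all dimensions and weight values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 eigen-language clusters, 4-dim Gaussian mean vectors.
# Columns of M are the cluster (eigen-language) means for one component.
M = rng.standard_normal((4, 3))

def language_mean(M, lam):
    """CAT-style component mean for one language: interpolate the
    cluster means with the language's weight vector lam."""
    return M @ lam

# Two closely related languages should end up with similar weights,
# hence nearby points in the language space.
lam_a = np.array([0.7, 0.2, 0.1])
lam_b = np.array([0.6, 0.3, 0.1])
dist = np.linalg.norm(language_mean(M, lam_a) - language_mean(M, lam_b))
```

Distances between weight vectors then give the automatic measure of language similarity the abstract refers to.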
Automatic detection of accent and lexical pronunciation errors in spontaneous non-native English speech
Detecting individual pronunciation errors and diagnosing pronunciation error tendencies in a language learner based on their speech are important components of computer-aided language learning (CALL). The tasks of error detection and error tendency diagnosis become particularly challenging when the speech in question is spontaneous, particularly given the challenges posed by the inconsistency of human annotation of pronunciation errors. This paper presents an approach to these tasks by distinguishing between lexical errors, wherein the speaker does not know how a particular word is pronounced, and accent errors, wherein the candidate's speech exhibits consistent patterns of phone substitution, deletion and insertion. Three annotated corpora of non-native English speech by speakers of multiple L1s are analysed, the consistency of human annotation is investigated, and a method is presented for detecting individual accent and lexical errors and diagnosing accent error tendencies at the speaker level.
Automatically Grading Learners’ English Using a Gaussian Process
There is a high demand around the world for the learning of English as a second language. Correspondingly, there is a need to assess the proficiency level of learners both during their studies and for formal qualifications. A number of automatic methods have been proposed to help meet this demand with varying degrees of success. This paper considers the automatic assessment of spoken English proficiency, which is still a challenging problem. In this scenario, the grader should be able to accurately assess the learner’s ability level from spontaneous, prompted, speech, independent of L1 language and the quality of the audio recording. Automatic graders are potentially more consistent than humans. However, the validity of the predicted grade varies. This paper proposes an automatic grader based on a Gaussian process. The advantage of using a Gaussian process is that as well as predicting a grade, it provides a measure of the uncertainty of its prediction. The uncertainty measure is sufficiently accurate to decide which automatic grades should be re-graded by humans. It can also be used to determine which candidates are hard to grade for humans and therefore need expert grading. Performance of the automatic grader is shown to be close to human graders on real candidate entries. Interpolation of human and GP grades further boosts performance.
This work was supported by Cambridge English, University of Cambridge.
This is the author accepted manuscript. The final version is available from ISCA via http://www.isca-speech.org/archive/slate_2015/sl15_007.htm
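The key property of the GP grader — a predicted grade accompanied by a predictive standard deviation usable for rejection — follows directly from the GP regression posterior. A minimal self-contained sketch with an RBF kernel; the feature vectors, grade scale, and hyperparameter values are invented for illustration:

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Squared-exponential kernel between two sets of feature vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(X_train, y_train, X_test, noise=0.1):
    """GP regression posterior: predicted grade and its standard
    deviation.  Candidates with large std can be routed to humans."""
    K = rbf(X_train, X_train) + noise**2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    Kss = rbf(X_test, X_test)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y_train
    cov = Kss - Ks @ Kinv @ Ks.T
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy 1-D feature vectors (e.g. a fluency statistic) and grades.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([2.0, 4.0, 5.5])
mean, std = gp_predict(X, y, np.array([[1.1], [10.0]]))
# The second test point lies far from all training data, so its
# predictive std is much larger -- exactly the "poor match" signal
# used to trigger human re-grading.
```

This uncertainty-driven rejection mechanism is what the later BULATS paper below revisits for DNN graders.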
Language independent and unsupervised acoustic models for speech recognition and keyword spotting
Copyright © 2014 ISCA. Developing high-performance speech processing systems for low-resource languages is very challenging. One approach to address the lack of resources is to make use of data from multiple languages. A popular direction in recent years is to train a multi-language bottleneck DNN. Language dependent and/or multi-language (all training languages) Tandem acoustic models (AM) are then trained. This work considers a particular scenario where the target language is unseen in multi-language training and has limited language model training data, a limited lexicon, and acoustic training data without transcriptions. A zero acoustic resources case is first described where a multi-language AM is directly applied, as a language independent AM (LIAM), to an unseen language. Secondly, in an unsupervised approach, a LIAM is used to obtain hypotheses for the target language acoustic data transcriptions, which are then used in training a language dependent AM. Three languages from the IARPA Babel project are used for assessment: Vietnamese, Haitian Creole and Bengali. Performance of the zero acoustic resources system is found to be poor, with keyword spotting at best 60% of language dependent performance. Unsupervised language dependent training yields performance gains. For one language (Haitian Creole) the Babel target is achieved on the in-vocabulary data.
Log-linear system combination using structured support vector machines
Building high accuracy speech recognition systems with limited language resources is a highly challenging task. Although the use of multi-language data for acoustic models yields improvements, performance is often unsatisfactory with highly limited acoustic training data. In these situations, it is possible to consider using multiple well-trained acoustic models and combining the system outputs together. Unfortunately, the computational cost associated with these approaches is high, as multiple decoding runs are required. To address this problem, this paper examines schemes based on log-linear score combination. This has a number of advantages over standard combination schemes. Even with limited acoustic training data, it is possible to train, for example, phone-specific combination weights, allowing detailed relationships between the available well-trained models to be obtained. To ensure robust parameter estimation, this paper casts log-linear score combination into a structured support vector machine (SSVM) learning task. This yields a method to train model parameters with good generalisation properties. Here the SSVM feature space is a set of scores from well-trained individual systems. The SSVM approach is compared to lattice rescoring and confusion network combination using language packs released within the IARPA Babel program.
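At its core, log-linear score combination forms a weighted sum of the individual systems' log-domain scores, with the weights (here learnt by the SSVM) controlling each system's contribution. A minimal sketch; the score values and weights below are invented for illustration:

```python
import numpy as np

def log_linear_combine(log_scores, weights):
    """Log-linear combination: a weighted sum of per-system log scores
    for each hypothesis.  Shape (n_hyps, n_systems) @ (n_systems,)."""
    return np.asarray(log_scores) @ np.asarray(weights)

# Hypothetical: three well-trained systems each score two competing
# hypotheses (log domain, higher is better).
log_scores = np.array([[-4.0, -3.5, -5.0],    # hypothesis A
                       [-4.5, -3.0, -4.0]])   # hypothesis B
weights = np.array([0.5, 0.3, 0.2])           # e.g. learnt by the SSVM

combined = log_linear_combine(log_scores, weights)
best = int(np.argmax(combined))               # hypothesis B wins here
```

Phone-specific weights, as in the paper, simply replace the single weight vector with one per phone context, while the SSVM objective ensures the weights generalise despite limited training data.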
Incorporating uncertainty into deep learning for spoken language assessment
There is a growing demand for automatic assessment of spoken English proficiency. These systems need to handle large variations in input data owing to the wide range of candidate skill levels and L1s, and errors from ASR. Some candidates will be a poor match to the training data set, undermining the validity of the predicted grade. For high-stakes tests it is essential for such systems not only to grade well, but also to provide a measure of their uncertainty in their predictions, enabling rejection to human graders. Previous work examined Gaussian Process (GP) graders which, though successful, do not scale well with large data sets. Deep Neural Networks (DNN) may also be used to provide uncertainty using Monte-Carlo Dropout (MCD). This paper proposes a novel method to yield uncertainty and compares it to GPs and DNNs with MCD. The proposed approach explicitly teaches a DNN to have low uncertainty on training data and high uncertainty on generated artificial data. In experiments conducted on data from the Business Language Testing Service (BULATS), the proposed approach is found to outperform GPs and DNNs with MCD in uncertainty-based rejection whilst achieving comparable grading performance.
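The MCD baseline mentioned above is straightforward to sketch: dropout is kept switched on at test time, several stochastic forward passes are made, and the spread of the predictions serves as the uncertainty. The network below is an untrained stand-in with invented dimensions, purely to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-hidden-layer grader; random weights stand in
# for a trained DNN in this sketch.
W1 = rng.standard_normal((10, 32)) * 0.3
W2 = rng.standard_normal((32, 1)) * 0.3

def forward(x, drop=0.5):
    """One stochastic pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop        # Bernoulli dropout mask
    h = h * mask / (1.0 - drop)              # inverted dropout scaling
    return (h @ W2).item()

def mcd_predict(x, n_samples=100):
    """MC Dropout: the sample mean is the grade; the sample standard
    deviation is the uncertainty used for rejection to human graders."""
    samples = np.array([forward(x) for _ in range(n_samples)])
    return samples.mean(), samples.std()

grade, uncertainty = mcd_predict(rng.standard_normal(10))
```

The paper's proposed alternative instead trains the network directly to report low uncertainty on real training data and high uncertainty on generated artificial data, avoiding the cost of many forward passes at test time.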